9 research outputs found
Locksynth: Deriving Synchronization Code for Concurrent Data Structures with ASP
We present Locksynth, a tool that automatically derives the synchronization
code needed for destructive updates to concurrent data structures that involve
a constant number of shared heap memory write operations. Locksynth serves as
the implementation of our prior work on deriving abstract synchronization code.
Designing concurrent data structures involves inferring correct synchronization
code starting with a prior understanding of the sequential data structure's
operations. Further, an understanding of the shared memory model and the
synchronization primitives is also required. The reasoning involved in
transforming a sequential data structure into its concurrent version can be
performed using Answer Set Programming (ASP), and we mechanized our approach in
previous work. This reasoning involves deduction and abduction, both of which
can be succinctly modeled in ASP. We assume that the abstract sequential code of the
data structure's operations is provided, alongside axioms that describe
concurrent behavior. This information is used to automatically derive
concurrent code for that data structure, such as dictionary operations for
linked lists and binary search trees that involve a constant number of
destructive update operations. For external height-balanced binary search
trees that involve left/right tree rotations, we are also able to infer the
correct set of locks, although we do not synthesize code. Locksynth performs the analyses
required to infer correct sets of locks and as a final step, also derives the
C++ synchronization code for the synthesized data structures. We also compare
the performance of the C++ code synthesized by Locksynth against the
hand-crafted versions available in the Synchrobench microbenchmark suite. To
the best of our knowledge, our tool is the first to employ ASP as a backend
reasoner to perform concurrent data structure synthesis.
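The derived C++ code itself is not reproduced in this abstract, but the kind of fine-grained synchronization such tools target, locking only the nodes touched by a destructive update rather than the whole structure, can be illustrated with a hand-written sketch. The following is our own minimal Python example of hand-over-hand (lock-coupling) insertion into a sorted linked list; the class and method names are illustrative, not Locksynth's output:

```python
import threading

class Node:
    def __init__(self, key, nxt=None):
        self.key = key
        self.next = nxt
        self.lock = threading.Lock()

class SortedList:
    """Sorted linked list with -inf/+inf sentinels and per-node locks."""
    def __init__(self):
        self.head = Node(float("-inf"), Node(float("inf")))

    def insert(self, key):
        # Hand-over-hand traversal: hold at most two locks at any time.
        pred = self.head
        pred.lock.acquire()
        curr = pred.next
        curr.lock.acquire()
        try:
            while curr.key < key:
                pred.lock.release()   # release the trailing lock...
                pred = curr
                curr = curr.next
                curr.lock.acquire()   # ...before taking the next one
            if curr.key == key:
                return False          # key already present
            # The single destructive heap write, protected by both locks:
            pred.next = Node(key, curr)
            return True
        finally:
            pred.lock.release()
            curr.lock.release()
```

Note that only the two nodes adjacent to the write are locked at the moment of the update, which is the property a constant number of shared heap writes makes tractable to reason about.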
Knowledge-driven Natural Language Understanding of English Text and its Applications
Understanding the meaning of a text is a fundamental challenge of natural
language understanding (NLU) research. An ideal NLU system should process a
language in a way that is not exclusive to a single task or a dataset. Keeping
this in mind, we have introduced a novel knowledge-driven semantic
representation approach for English text. By leveraging the VerbNet lexicon, we
are able to map the syntax tree of a text to its commonsense meaning,
represented using basic knowledge primitives. The general-purpose knowledge
representation produced by our approach can be used to build any
reasoning-based NLU system that can also provide justifications. We applied
this approach to construct two NLU
applications that we present here: SQuARE (Semantic-based Question Answering
and Reasoning Engine) and StaCACK (Stateful Conversational Agent using
Commonsense Knowledge). Both systems work by "truly understanding" the
natural language text they process, and both provide natural language
explanations for their responses while maintaining high accuracy.
Comment: Preprint. Accepted by the 35th AAAI Conference (AAAI-21) Main Track
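As a rough illustration of the mapping idea (the frame and primitive names below are our own simplification, not the paper's actual representation), a VerbNet-style verb frame can be turned into predicate-like knowledge primitives over the clause's thematic roles:

```python
# Hypothetical, simplified verb semantics in the spirit of VerbNet's
# verb-class frames (here: "give", as in class give-13.1, simplified).
VERB_SEMANTICS = {
    "give": lambda agent, theme, recipient: [
        ("transfer", agent, theme, recipient),
        ("has_possession", "end", recipient, theme),
        ("not_has_possession", "end", agent, theme),
    ],
}

def clause_to_primitives(verb, roles):
    """Map a verb and its thematic roles to knowledge primitives."""
    return VERB_SEMANTICS[verb](
        roles["Agent"], roles["Theme"], roles["Recipient"]
    )

# "John gave Mary a book" -> facts a reasoner can query and justify.
facts = clause_to_primitives(
    "give", {"Agent": "john", "Theme": "book", "Recipient": "mary"}
)
```

The point of such a representation is that downstream reasoning (question answering, dialogue state) operates on the primitives rather than on surface text, which is what makes justification of answers possible.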
Logic-Based Explainable and Incremental Machine Learning
Mainstream machine learning methods lack interpretability, explainability, incrementality, and data economy. We propose using logic programming (LP) to rectify these problems. We discuss the FOLD family of rule-based machine learning algorithms that learn models from relational datasets as a set of default rules. These models are competitive with state-of-the-art machine learning systems in terms of accuracy and execution efficiency. We also motivate how logic programming can be useful for theory revision and explanation-based learning.
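The FOLD algorithms themselves are not reproduced here, but the shape of the models they learn, default rules with exceptions under negation as failure, can be sketched as follows. The rule and predicates are the classic illustrative example (birds fly unless they are penguins), not output taken from the paper:

```python
# A FOLD-style model is a set of default rules with exceptions, e.g.
#   flies(X) :- bird(X), not penguin(X).
def make_rule(body, exceptions):
    """Return a classifier: the rule head holds for x if every body
    predicate holds and no exception predicate holds (negation as
    failure on the exceptions)."""
    def rule(x):
        return all(p(x) for p in body) and not any(e(x) for e in exceptions)
    return rule

birds = {"tweety", "sam", "opus"}
penguins = {"opus"}

flies = make_rule(
    body=[lambda x: x in birds],
    exceptions=[lambda x: x in penguins],
)
```

Because the model is just a readable rule, every prediction carries its own explanation: "opus does not fly because, although a bird, it is a penguin." That is the interpretability and explainability the abstract contrasts with mainstream methods.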